
    Delayed theory combination vs. Nelson-Oppen for satisfiability modulo theories: a comparative analysis

    Most state-of-the-art approaches for Satisfiability Modulo Theories (SMT(T)) rely on the integration between a SAT solver and a decision procedure for sets of literals in the background theory T (T-solver). Often T is the combination T1 ∪ T2 of two (or more) simpler theories (SMT(T1 ∪ T2)), s.t. the specific Ti-solvers must be combined. Up to a few years ago, the standard approach to SMT(T1 ∪ T2) was to integrate the SAT solver with one combined (T1 ∪ T2)-solver, obtained from two distinct Ti-solvers by means of evolutions of Nelson and Oppen's (NO) combination procedure, in which the Ti-solvers deduce and exchange interface equalities. Nowadays many state-of-the-art SMT solvers use evolutions of a more recent SMT(T1 ∪ T2) procedure called Delayed Theory Combination (DTC), in which each Ti-solver interacts directly and only with the SAT solver, so that part or all of the (possibly very expensive) reasoning effort on interface equalities is delegated to the SAT solver itself. In this paper we present a comparative analysis of DTC vs. NO for SMT(T1 ∪ T2). On the one hand, we explain the advantages of DTC in exploiting the power of modern SAT solvers to reduce the search. On the other hand, we show that the extra amount of Boolean search required of the SAT solver can be controlled.
In fact, we prove two novel theoretical results, for both convex and non-convex theories and for different deduction capabilities of the Ti-solvers, which relate the amount of extra Boolean search required of the SAT solver by DTC to the number of deductions and case-splits required of the Ti-solvers by NO in order to perform the same tasks: (i) under the same hypotheses on the deduction capabilities of the Ti-solvers required by NO, DTC causes no extra Boolean search; (ii) using Ti-solvers with limited or no deduction capabilities, the extra Boolean search required can be reduced to a negligible amount by controlling the quality of the T-conflict sets returned by the Ti-solvers.
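Why the reasoning effort on interface equalities can be "very expensive" is easy to see by counting: a total assignment to the interface equality atoms corresponds to an arrangement (a partition) of the interface variables, and the number of arrangements grows as the Bell numbers. The sketch below (not the paper's algorithm, just an illustration; variable names are hypothetical) enumerates the arrangements that DTC's SAT solver implicitly searches over:

```python
def arrangements(vars_):
    """Enumerate all partitions (arrangements) of the interface variables.

    In delayed theory combination, the SAT solver case-splits on interface
    equality atoms e_ij; each consistent total assignment to those atoms
    corresponds to one arrangement (partition) enumerated here.
    """
    if not vars_:
        yield []
        return
    first, rest = vars_[0], vars_[1:]
    for partial in arrangements(rest):
        # put `first` into each existing block in turn...
        for i in range(len(partial)):
            yield partial[:i] + [partial[i] + [first]] + partial[i + 1:]
        # ...or into a new singleton block
        yield partial + [[first]]

ifc = ["x1", "x2", "x3"]  # hypothetical interface variables from purification
print(len(list(arrangements(ifc))))  # 5 arrangements = Bell(3)
```

With only a handful of interface variables the count explodes (Bell(10) is already 115,975), which is why the paper's results on bounding the extra Boolean search matter.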

    Quantitative proteomics at different depths in human articular cartilage reveals unique patterns of protein distribution

    The articular cartilage of synovial joints ensures friction-free mobility and attenuates mechanical impact on the joint during movement. These functions are mediated by the complex network of extracellular molecules characteristic of articular cartilage. Zonal differences in the extracellular matrix (ECM) are well recognized. However, knowledge about the precise molecular composition in the different zones remains limited. In the present study, we investigated the distribution of ECM molecules along the surface-to-bone axis, using quantitative non-targeted as well as targeted proteomics. In a discovery approach, iTRAQ mass spectrometry was used to identify all extractable ECM proteins in the different layers of a human lateral tibial plateau full-thickness cartilage sample. A targeted MRM mass spectrometry approach was then applied to verify these findings and to extend the analysis to four medial tibial plateau samples. In the lateral tibial plateau sample, the unique distribution patterns of 70 ECM proteins were identified, revealing groups of proteins with a preferential distribution to the superficial, intermediate or deep regions of articular cartilage. A detailed analysis of 29 selected proteins confirmed these findings and revealed similar distribution patterns in the four medial tibial plateau samples. The results of this study provide, for the first time, an overview of the zonal distribution of a broad range of cartilage ECM proteins and open up further investigation of the functional roles of matrix proteins in the different zones of articular cartilage in health and disease.

    Modeling and Analyzing Contextual Requirements

    The relation between contexts and requirements can be very complex to analyze. A context can motivate a requirement, a requirement can be satisfied only in a specific context, and a context can influence the quality of each possible alternative for satisfying a requirement. To capture and deeply understand this relation, we need to start from the reasons for a requirement, namely stakeholders' goals, and analyze at this level the system variability with respect to the context. In this paper, we propose a goal-based approach to model and analyze contextual requirements. We adopt the Tropos goal-modeling framework, into which we introduce contextual variation points, and provide a set of constructs to hierarchically analyze contexts. We also articulate a new problem, the context interaction problem; we study its influence on both monitoring and functional requirements, and we finally provide a SAT-based approach to deal with it. We show the process for creating our models and illustrate our approach on a museum-guide scenario.
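A SAT-based treatment of interacting contexts can be illustrated with a minimal sketch (the encoding below is an assumption for illustration, not the paper's actual Tropos formalism): contexts become Boolean variables, interaction constraints become clauses, and a brute-force satisfiability check reveals whether a set of contexts can hold together.

```python
from itertools import product

def satisfiable(n_contexts, clauses):
    """Brute-force SAT check. Clauses are lists of literals:
    +i means context i holds, -i means it does not (1-based)."""
    for assign in product([False, True], repeat=n_contexts):
        def holds(lit):
            v = assign[abs(lit) - 1]
            return v if lit > 0 else not v
        if all(any(holds(l) for l in clause) for clause in clauses):
            return True
    return False

# Hypothetical museum-guide contexts: 1 = visitor-in-room,
# 2 = audio-tour-active, 3 = silent-zone.
# "audio tour requires visitor in room": (not 2) or 1
# "silent zone excludes audio tour":     (not 3) or (not 2)
clauses = [[-2, 1], [-3, -2]]
print(satisfiable(3, clauses))               # consistent on their own
print(satisfiable(3, clauses + [[2], [3]]))  # forcing both: interaction conflict
```

Real SAT solvers replace the exponential enumeration, but the encoding idea is the same: an unsatisfiable result pinpoints a context interaction problem.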

    Optimal hydrofoil configurations for work boats in the HAVtruck class


    Tighter Integration of BDDs and SMT for Predicate Abstraction

    We address the problem of computing the exact abstraction of a program with respect to a given set of predicates, a key computation step in Counter-Example Guided Abstraction Refinement. We build on a recently proposed approach that integrates BDD-based quantification techniques with SMT-based constraint solving to compute the abstraction. We extend the previous work in three main directions. First, we propose a much tighter integration of the BDD-based and SMT-based reasoning, where the two solvers strongly collaborate to guide the search. Second, we propose a technique to reduce redundancy in the search by blocking already visited models. Third, we present an algorithm exploiting a conjunctively partitioned representation of the formula to quantify. This algorithm provides a general framework into which all the presented optimizations integrate in a natural way. Moreover, it allows us to overcome the limitations of the original approach, which used a monolithic BDD representation of the formula to quantify. We experimentally evaluate the merits of the proposed optimizations, and show that they significantly improve over previous approaches.
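The core enumerate-and-block loop behind exact predicate abstraction can be sketched in a few lines (a toy stand-in, not the paper's BDD/SMT implementation; the `consistent` callback is a hypothetical substitute for the SMT consistency check): repeatedly find an assignment to the predicates that some concrete state realizes, record it, and block it so later iterations cannot revisit it.

```python
from itertools import product

def exact_abstraction(preds, consistent):
    """Enumerate the abstract models: assignments to the predicates that
    are realizable by some concrete state. `consistent` stands in for the
    SMT check; blocked models are excluded from later searches, mirroring
    the blocking clauses used to avoid revisiting models."""
    blocked = []
    models = []
    while True:
        found = None
        for assign in product([False, True], repeat=len(preds)):
            if assign in blocked:
                continue  # pruned by a blocking clause
            if consistent(dict(zip(preds, assign))):
                found = assign
                break
        if found is None:
            break              # no further models: abstraction is complete
        models.append(found)
        blocked.append(found)  # block this model for subsequent iterations
    return models

# Hypothetical predicates over an integer x: p1 = (x > 0), p2 = (x > 10).
# Since p2 implies p1, only 3 of the 4 abstract states are realizable.
consistent = lambda m: not (m["p2"] and not m["p1"])
print(len(exact_abstraction(["p1", "p2"], consistent)))  # 3
```

The exponential enumeration is exactly what the paper's BDD-based quantification and partitioned representation are designed to tame; the sketch only shows the role the blocking step plays.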
